Turbulence Model


Predicting Onflow Parameters Using Transfer Learning for Domain and Task Adaptation

Yilmaz, Emre, Bekemeyer, Philipp

arXiv.org Artificial Intelligence

Determining onflow parameters is crucial for wind tunnel testing as well as regular flight and wind turbine operations. These parameters have traditionally been obtained via direct measurements, which may become problematic in case of sensor faults. Alternatively, a data-driven prediction model based on surface pressure data can be used to determine them. It is essential that such predictors achieve close to real-time learning, as dictated by practical applications such as monitoring wind tunnel operations or tracking variations in the aerodynamic performance of aerospace and wind energy systems. To overcome the challenges caused by changes in the data distribution, as well as in adapting to a new prediction task, we propose a transfer learning methodology to predict the onflow parameters, specifically angle of attack and onflow speed. It requires first training a convolutional neural network (ConvNet) model offline for the core prediction task, then freezing the weights of this model except for selected layers preceding the output node, and finally executing transfer learning by retraining these layers. A demonstration of this approach is provided using steady CFD analysis data for an airfoil for i) domain adaptation, where transfer learning is performed with data from a target domain whose distribution differs from the source domain, and ii) task adaptation, where the prediction task is changed. Further explorations of the influence of noisy data, performance on an extended domain, and trade studies varying sample sizes and architectures are provided. Results demonstrate the potential of the approach for adapting to changing data distributions, domain extension, and task updates, while its application to noisy data proves less effective.
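The freeze-and-retrain recipe can be sketched in a few lines. The toy numpy example below is a minimal sketch under stated assumptions: a fixed random tanh projection stands in for the frozen ConvNet layers, and the synthetic target-domain data stand in for surface pressure measurements; only the output layer is refit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen feature extractor: a fixed random projection with tanh
# stands in for pretrained ConvNet layers whose weights are NOT updated.
W_frozen = rng.normal(size=(8, 16))

def features(x):
    return np.tanh(x @ W_frozen)   # frozen layers: never retrained

# Target-domain data with a shifted distribution (toy stand-in for new CFD data).
X_target = rng.normal(loc=0.5, size=(200, 8))
y_target = X_target.sum(axis=1)    # toy stand-in for angle of attack

# Transfer learning step: refit ONLY the output layer (here by least squares).
Phi = np.hstack([features(X_target), np.ones((200, 1))])  # features + bias
w_head, *_ = np.linalg.lstsq(Phi, y_target, rcond=None)

pred = Phi @ w_head
mse = np.mean((pred - y_target) ** 2)
```

In a deep learning framework, the same idea corresponds to setting `requires_grad=False` (or the equivalent) on all but the final layers before resuming training on target-domain data.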


Bayesian inference of mean velocity fields and turbulence models from flow MRI

Kontogiannis, A., Nair, P., Loecher, M., Ennis, D. B., Marsden, A., Juniper, M. P.

arXiv.org Artificial Intelligence

We solve a Bayesian inverse Reynolds-averaged Navier-Stokes (RANS) problem that assimilates mean flow data by jointly reconstructing the mean flow field and learning its unknown RANS parameters. We devise an algorithm that learns the most likely parameters of an algebraic effective viscosity model, and estimates their uncertainties, from mean flow data of a turbulent flow. We conduct a flow MRI experiment to obtain mean flow data of a confined turbulent jet in an idealized medical device known as the FDA (Food and Drug Administration) nozzle. The algorithm successfully reconstructs the mean flow field and learns the most likely turbulence model parameters without overfitting. The methodology accepts any turbulence model, be it algebraic (explicit) or multi-equation (implicit), as long as the model is differentiable, and naturally extends to unsteady turbulent flows.
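As a minimal illustration of this kind of inverse problem, the sketch below infers an effective viscosity and a Laplace-approximation uncertainty from noisy mean-flow data. The 1-D channel forward model and all numbers are illustrative assumptions, not the paper's RANS or flow-MRI setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D channel: u(y) = G / (2 * nu_eff) * y * (h - y); infer nu_eff
# from noisy "measured" mean-velocity data.
h, G, nu_true, sigma = 1.0, 2.0, 0.1, 0.02
y = np.linspace(0.05, 0.95, 40)
shape = y * (h - y)
u_data = G / (2.0 * nu_true) * shape + rng.normal(scale=sigma, size=y.size)

# Gaussian likelihood with a flat prior: the MAP estimate of the linear
# coefficient c = G / (2 * nu_eff) is ordinary least squares.
c_map = (shape @ u_data) / (shape @ shape)
nu_map = G / (2.0 * c_map)

# Laplace approximation: posterior standard deviation of c.
c_std = sigma / np.sqrt(shape @ shape)
```

A real version replaces the closed-form fit with gradient-based optimization through a differentiable RANS solver, which is what allows arbitrary (explicit or implicit) turbulence models.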


Using Parametric PINNs for Predicting Internal and External Turbulent Flows

Ghosh, Shinjan, Chakraborty, Amit, Brikis, Georgia Olympia, Dey, Biswadip

arXiv.org Artificial Intelligence

Computational fluid dynamics (CFD) solvers employing two-equation eddy viscosity models are the industry standard for simulating turbulent flows using the Reynolds-averaged Navier-Stokes (RANS) formulation. While these methods are computationally less expensive than direct numerical simulations, they can still incur significant computational costs to achieve the desired accuracy. In this context, physics-informed neural networks (PINNs) offer a promising approach for developing parametric surrogate models that leverage both existing, but limited, CFD solutions and the governing differential equations to predict simulation outcomes in a computationally efficient, differentiable, and near real-time manner. In this work, we build upon the previously proposed RANS-PINN framework, which focused only on predicting flow over a cylinder. To assess the efficacy of RANS-PINN as a viable approach to building parametric surrogate models, we investigate its accuracy in predicting relevant turbulent flow variables for both internal and external flows. To ensure training convergence with a more complex loss function, we adopt a novel sampling approach that exploits the domain geometry to ensure a proper balance among the contributions from various regions within the solution domain. The effectiveness of this framework is then demonstrated for two scenarios that represent a broad class of internal and external flow problems.
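The geometry-aware sampling idea can be illustrated with a hypothetical 1-D channel: oversample near-wall bands so their loss contributions are not swamped by the bulk. The band width and fractions below are illustrative assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_collocation(n, wall_frac=0.4, wall_band=0.1):
    """Geometry-aware collocation sampling for a channel y in [0, 1]:
    place a fixed fraction of points in thin near-wall bands so that
    wall-region loss terms are well represented."""
    n_wall = int(n * wall_frac)
    n_bulk = n - n_wall
    y_bulk = rng.uniform(0.0, 1.0, n_bulk)          # uniform bulk samples
    half = n_wall // 2
    y_wall = np.concatenate([                        # split between both walls
        rng.uniform(0.0, wall_band, half),
        rng.uniform(1.0 - wall_band, 1.0, n_wall - half),
    ])
    return np.concatenate([y_bulk, y_wall])

pts = sample_collocation(10_000)
near_wall = np.mean((pts < 0.1) | (pts > 0.9))       # fraction in the wall bands
```

With these settings the wall bands receive roughly half of all collocation points, instead of the 20% a uniform sampler would give them.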


A Predictive Surrogate Model for Heat Transfer of an Impinging Jet on a Concave Surface

Salavatidezfouli, Sajad, Rakhsha, Saeid, Sheidani, Armin, Stabile, Giovanni, Rozza, Gianluigi

arXiv.org Artificial Intelligence

This paper comprehensively investigates the efficacy of various Model Order Reduction (MOR) and deep learning techniques in predicting heat transfer in a pulsed jet impinging on a concave surface. Expanding on previous experimental and numerical research involving pulsed circular jets, this investigation evaluates Predictive Surrogate Models (PSM) for heat transfer across various jet characteristics. To this end, this work introduces two predictive approaches: first, a Fast Fourier Transform-augmented Artificial Neural Network (FFT-ANN) for predicting the average Nusselt number under constant-frequency scenarios, and second, a Proper Orthogonal Decomposition and Long Short-Term Memory (POD-LSTM) approach for random-frequency impingement jets. The POD-LSTM method proves to be a robust solution for predicting the local heat transfer rate under random-frequency impingement scenarios, capturing both the trend and value of the temporal modes. The comparison of these approaches highlights the versatility and efficacy of advanced machine learning techniques in modelling complex heat transfer phenomena.
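The POD half of a POD-LSTM pipeline can be sketched via the SVD of a snapshot matrix. The synthetic two-mode "flow" below is an assumption for illustration, and the LSTM that would forecast the temporal coefficients is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic snapshot matrix (space x time): two coherent modes plus noise.
x = np.linspace(0.0, np.pi, 64)[:, None]
t = np.linspace(0.0, 4.0 * np.pi, 100)[None, :]
snapshots = (np.sin(x) * np.cos(t)
             + 0.5 * np.sin(2.0 * x) * np.sin(2.0 * t)
             + 0.01 * rng.normal(size=(64, 100)))

# POD via SVD: columns of U are spatial modes; the rows of diag(S) @ Vt are
# the temporal coefficients that an LSTM would be trained to forecast.
U, S, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 2                                        # number of retained modes
recon = U[:, :r] @ np.diag(S[:r]) @ Vt[:r]   # rank-r reconstruction
rel_err = np.linalg.norm(snapshots - recon) / np.linalg.norm(snapshots)
```

Two modes recover the synthetic field almost exactly because the data were built from two coherent structures; in practice `r` is chosen from the singular-value decay.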


RANS-PINN based Simulation Surrogates for Predicting Turbulent Flows

Ghosh, Shinjan, Chakraborty, Amit, Brikis, Georgia Olympia, Dey, Biswadip

arXiv.org Artificial Intelligence

Physics-informed neural networks (PINNs) provide a framework to build surrogate models for dynamical systems governed by differential equations. During the learning process, PINNs incorporate a physics-based regularization term within the loss function to enhance generalization performance. Since simulating dynamics controlled by partial differential equations (PDEs) can be computationally expensive, PINNs have gained popularity in learning parametric surrogates for fluid flow problems governed by Navier-Stokes equations. In this work, we introduce RANS-PINN, a modified PINN framework, to predict flow fields (i.e., velocity and pressure) in high Reynolds number turbulent flow regimes. To account for the additional complexity introduced by turbulence, RANS-PINN employs a 2-equation eddy viscosity model based on a Reynolds-averaged Navier-Stokes (RANS) formulation. Furthermore, we adopt a novel training approach that ensures effective initialization and balance among the various components of the loss function. The effectiveness of the RANS-PINN framework is then demonstrated using a parametric PINN.
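One common way to balance the components of a composite PINN loss, consistent with the training approach described, is to normalize each raw term by a reference magnitude recorded during a warm-up phase. The sketch below uses purely illustrative numbers, not the paper's actual loss values.

```python
import numpy as np

def balanced_loss(raw_terms, ref_magnitudes):
    """Scale each loss term by a reference magnitude (e.g. its value at the
    end of a data-only warm-up phase) so that no single term dominates."""
    return sum(term / ref for term, ref in zip(raw_terms, ref_magnitudes))

# Hypothetical raw components: data misfit, continuity residual, momentum residual.
raw = [2.0e-3, 5.0e1, 8.0e0]
# Reference magnitudes recorded after warm-up (illustrative numbers).
ref = [4.0e-3, 1.0e2, 1.6e1]

total = balanced_loss(raw, ref)   # each term now contributes O(1)
```

Without the normalization, the momentum residual here would be four orders of magnitude larger than the data term and would dominate every gradient step.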


Differentiable Turbulence II

Shankar, Varun, Maulik, Romit, Viswanathan, Venkatasubramanian

arXiv.org Artificial Intelligence

Differentiable fluid simulators are increasingly demonstrating value as useful tools for developing data-driven models in computational fluid dynamics (CFD). Differentiable turbulence, or the end-to-end training of machine learning (ML) models embedded in CFD solution algorithms, captures both the generalization power and limited upfront cost of physics-based simulations, and the flexibility and automated training of deep learning methods. We develop a framework for integrating deep learning models into a generic finite element numerical scheme for solving the Navier-Stokes equations, applying the technique to learn a sub-grid scale closure using a multi-scale graph neural network. We demonstrate the method on several realizations of flow over a backwards-facing step, testing on both unseen Reynolds numbers and new geometry. We show that the learned closure can achieve accuracy comparable to traditional large eddy simulation on a finer grid that amounts to an equivalent speedup of 10x. As the desire and need for cheaper CFD simulations grows, we see hybrid physics-ML methods as a path forward to be exploited in the near future.
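A minimal solver-in-the-loop sketch, assuming a toy 1-D periodic diffusion "solver" and a single scalar closure parameter; central finite differences stand in for the adjoint/autodiff machinery of a real differentiable solver, and all numbers are illustrative.

```python
import numpy as np

def solve(nu_t, u0, dt=0.01, steps=500):
    # Explicit 1-D periodic diffusion: a toy differentiable "solver"
    # whose scalar eddy viscosity nu_t is the learnable closure.
    u = u0.copy()
    for _ in range(steps):
        lap = np.roll(u, 1) - 2.0 * u + np.roll(u, -1)
        u = u + dt * nu_t * lap
    return u

x = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
u0 = np.sin(x)
target = solve(0.8, u0)      # "high-fidelity" reference solution

# End-to-end (a-posteriori) training: differentiate the solution-level loss
# with respect to the closure parameter and descend.
loss = lambda p: np.mean((solve(p, u0) - target) ** 2)
nu, lr, eps = 0.2, 5.0, 1e-4
for _ in range(50):
    grad = (loss(nu + eps) - loss(nu - eps)) / (2.0 * eps)
    nu -= lr * grad
```

The key feature is that the loss is computed on the *solved* field, so gradients flow through the time-stepping loop; this is what distinguishes end-to-end training from fitting the closure to a-priori stress data.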


Differentiable Turbulence

Shankar, Varun, Maulik, Romit, Viswanathan, Venkatasubramanian

arXiv.org Artificial Intelligence

Deep learning is increasingly becoming a promising pathway to improving the accuracy of sub-grid scale (SGS) turbulence closure models for large eddy simulations (LES). We leverage the concept of differentiable turbulence, whereby an end-to-end differentiable solver is used in combination with physics-inspired choices of deep learning architectures to learn highly effective and versatile SGS models for two-dimensional turbulent flow. We perform an in-depth analysis of the inductive biases in the chosen architectures, finding that the inclusion of small-scale non-local features is most critical to effective SGS modeling, while large-scale features can improve pointwise accuracy of the a-posteriori solution field. The filtered velocity gradient tensor can be mapped directly to the SGS stress via decomposition of the inputs and outputs into isotropic, deviatoric, and anti-symmetric components. We see that the model can generalize to a variety of flow configurations, including higher and lower Reynolds numbers and different forcing conditions. We show that the differentiable physics paradigm is more successful than offline, a-priori learning, and that hybrid solver-in-the-loop approaches to deep learning offer an ideal balance between computational efficiency, accuracy, and generalization. Our experiments provide physics-based recommendations for deep-learning based SGS modeling for generalizable closure modeling of turbulence.
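The isotropic/deviatoric/anti-symmetric decomposition mentioned here is standard tensor algebra and can be written directly; the gradient tensor below is a random example, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

g = rng.normal(size=(3, 3))              # filtered velocity gradient (random example)

iso = (np.trace(g) / 3.0) * np.eye(3)    # isotropic part
sym = 0.5 * (g + g.T)                    # symmetric (rate-of-strain) part
dev = sym - iso                          # deviatoric: trace-free symmetric part
anti = 0.5 * (g - g.T)                   # anti-symmetric (rotation-rate) part

recon = iso + dev + anti                 # the three components recover g exactly
```

Mapping each component of the input to the corresponding component of the SGS stress builds a physically motivated inductive bias into the model architecture.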


A probabilistic, data-driven closure model for RANS simulations with aleatoric, model uncertainty

Agrawal, Atul, Koutsourelakis, Phaedon-Stelios

arXiv.org Artificial Intelligence

We propose a data-driven closure model for Reynolds-averaged Navier-Stokes (RANS) simulations that incorporates aleatoric model uncertainty. The proposed closure consists of two parts: a parametric one, which utilizes previously proposed, neural-network-based tensor basis functions that depend on the rate-of-strain and rotation tensor invariants, and latent random variables that account for aleatoric model errors. A fully Bayesian formulation is proposed, combined with a sparsity-inducing prior, in order to identify regions in the problem domain where the parametric closure is insufficient and where stochastic corrections to the Reynolds stress tensor are needed. Training is performed using sparse, indirect data, such as mean velocities and pressures, in contrast to the majority of alternatives that require direct Reynolds stress data. For inference and learning, a Stochastic Variational Inference scheme is employed, which is based on Monte Carlo estimates of the pertinent objective in conjunction with the reparametrization trick. This necessitates derivatives of the output of the RANS solver, for which we developed an adjoint-based formulation. In this manner, the parametric sensitivities from the differentiable solver can be combined with the built-in automatic differentiation capability of the neural network library in order to enable an end-to-end differentiable framework. We demonstrate the capability of the proposed model to produce accurate, probabilistic, predictive estimates for all flow quantities, even in regions where model errors are present, on a separated flow in the backward-facing step benchmark problem.
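A minimal sketch of Stochastic Variational Inference with the reparametrization trick, on a hypothetical conjugate-Gaussian toy problem rather than a RANS solver: a Gaussian variational posterior over a latent "model error" is fitted by stochastic gradient ascent on Monte Carlo ELBO gradients.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy problem: latent "model error" z with prior N(0, 1); data x | z ~ N(z, s^2).
z_true, s = 1.5, 0.5
x = z_true + rng.normal(scale=s, size=200)

# Variational posterior q(z) = N(mu, exp(log_sig)^2).
mu, log_sig = 0.0, 0.0
lr, n_mc = 1e-4, 32
for _ in range(2000):
    eps = rng.normal(size=n_mc)
    z = mu + np.exp(log_sig) * eps                      # reparametrization trick
    # d/dz [log p(x|z) + log p(z)] evaluated at the Monte Carlo samples.
    dlogjoint = np.sum(x[None, :] - z[:, None], axis=1) / s**2 - z
    g_mu = np.mean(dlogjoint)                           # pathwise gradient wrt mu
    g_ls = np.mean(dlogjoint * eps) * np.exp(log_sig) + 1.0  # + entropy gradient
    mu += lr * g_mu
    log_sig += lr * g_ls
```

In the paper's setting, `dlogjoint` would come from adjoint derivatives of the RANS solver chained with the neural network library's automatic differentiation; the reparametrized sampling and gradient estimates are the same.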


Sample-Efficient and Surrogate-Based Design Optimization of Underwater Vehicle Hulls

Vardhan, Harsh, Hyde, David, Timalsina, Umesh, Volgyesi, Peter, Sztipanovits, Janos

arXiv.org Artificial Intelligence

Physics simulations are a computational bottleneck in computer-aided design (CAD) optimization processes. Hence, in order to make accurate (computationally expensive) simulations feasible for use in design optimization, one requires either an optimization framework that is highly sample-efficient or fast data-driven proxies (surrogate models) for long running simulations. In this work, we leverage recent advances in optimization and artificial intelligence (AI) to address both of these potential solutions, in the context of designing an optimal unmanned underwater vehicle (UUV). We first investigate and compare the sample efficiency and convergence behavior of different optimization techniques with a standard computational fluid dynamics (CFD) solver in the optimization loop. We then develop a deep neural network (DNN) based surrogate model to approximate drag forces that would otherwise be computed via direct numerical simulation with the CFD solver. The surrogate model is in turn used in the optimization loop of the hull design. Our study finds that the Bayesian Optimization Lower Confidence Bound (BO LCB) algorithm is the most sample-efficient optimization framework and has the best convergence behavior of those considered. Subsequently, we show that our DNN-based surrogate model predicts drag force on test data in tight agreement with CFD simulations, with a mean absolute percentage error (MAPE) of 1.85%. Combining these results, we demonstrate a two-orders-of-magnitude speedup (with comparable accuracy) for the design optimization process when the surrogate model is used. To our knowledge, this is the first study applying Bayesian optimization and DNN-based surrogate modeling to the problem of UUV design optimization, and we share our developments as open-source software.
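A compact sketch of Bayesian optimization with a lower confidence bound (LCB) acquisition, using a hypothetical cheap 1-D "drag" objective in place of a CFD solver and a from-scratch zero-mean GP; the kernel length scale, exploration weight, and objective are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def drag(x):
    # Hypothetical stand-in for an expensive CFD drag evaluation.
    return (x - 0.3) ** 2 + 0.05 * np.sin(20.0 * x)

def gp_posterior(Xs, Ys, q, ls=0.1, jitter=1e-6):
    # Zero-mean GP regression with an RBF kernel, evaluated at query points q.
    Xs, Ys = np.asarray(Xs), np.asarray(Ys)
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)
    K = k(Xs, Xs) + jitter * np.eye(len(Xs))
    Kq = k(q, Xs)
    mu = Kq @ np.linalg.solve(K, Ys)
    var = 1.0 - np.einsum('ij,ji->i', Kq, np.linalg.solve(K, Kq.T))
    return mu, np.sqrt(np.maximum(var, 1e-12))

grid = np.linspace(0.0, 1.0, 201)
X = list(rng.uniform(0.0, 1.0, 3))            # initial designs
Y = [drag(x) for x in X]

for _ in range(15):                           # BO loop
    mu, sd = gp_posterior(X, Y, grid)
    x_next = grid[np.argmin(mu - 2.0 * sd)]   # lower confidence bound acquisition
    X.append(float(x_next))
    Y.append(drag(x_next))

best_x = X[int(np.argmin(Y))]
```

Minimizing `mu - 2 * sd` trades off exploitation (low predicted drag) against exploration (high posterior uncertainty), which is what makes LCB sample-efficient relative to grid or random search.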


Evaluation of physics constrained data-driven methods for turbulence model uncertainty quantification

Matha, Marcel, Kucharczyk, Karsten, Morsbach, Christian

arXiv.org Artificial Intelligence

In order to achieve a virtual certification process and robust designs for turbomachinery, the uncertainty bounds for Computational Fluid Dynamics have to be known. The formulation of turbulence closure models constitutes a major source of the overall uncertainty of Reynolds-averaged Navier-Stokes simulations. We discuss the common practice of applying a physics-constrained eigenspace perturbation of the Reynolds stress tensor in order to account for the model-form uncertainty of turbulence models. Since the basic methodology often leads to overly generous uncertainty estimates, we extend a recent approach by adding a machine learning strategy. The application of a data-driven method is motivated by the goal of detecting flow regions that are prone to suffer from a lack of turbulence model prediction accuracy. In this way, any user input for choosing the degree of uncertainty is intended to become obsolete. This work particularly investigates an approach that provides an a priori estimate of prediction confidence when no accurate data are available to judge the prediction. The flow around the NACA 4412 airfoil at near-stall conditions demonstrates the successful application of the data-driven eigenspace perturbation framework. Furthermore, we highlight the objectives and limitations of the underlying methodology.
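The eigenspace perturbation itself can be sketched directly: decompose the Reynolds stress anisotropy tensor, shift its eigenvalues toward a limiting state, and reconstruct. The baseline Reynolds stress and perturbation strength below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative baseline Reynolds stress tensor (symmetric PSD); k = trace / 2.
R = np.array([[0.6, 0.1, 0.0],
              [0.1, 0.4, 0.0],
              [0.0, 0.0, 0.2]])
k = 0.5 * np.trace(R)

# Anisotropy tensor and its eigendecomposition (eigenvalues in ascending order).
a = R / (2.0 * k) - np.eye(3) / 3.0
lam, V = np.linalg.eigh(a)

# Shift eigenvalues toward the one-component limiting state with strength delta
# (delta = 1 reaches the limit; a data-driven model would predict delta locally).
lam_1c = np.array([-1.0 / 3.0, -1.0 / 3.0, 2.0 / 3.0])
delta = 0.5
lam_pert = (1.0 - delta) * lam + delta * lam_1c

# Reconstruct the perturbed Reynolds stress: same k, modified anisotropy.
R_pert = 2.0 * k * (V @ np.diag(lam_pert) @ V.T + np.eye(3) / 3.0)
```

The machine learning strategy discussed in the abstract replaces the global, user-chosen `delta` with a locally predicted perturbation strength, concentrating the uncertainty estimate in regions where the turbulence model is expected to be inaccurate.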